Internalize how data organization + algorithmic paradigms shape performance. Focus on mental models, complexity trade‑offs, and implementation patterns rather than rote memorization.
Asymptotic reasoning is a comparative tool, not a stopwatch.
| Need | Best Candidates | Avoid When | Notes |
|---|---|---|---|
| Fast membership test | Hash Set / Bloom Filter | Adversarial keys (hash attacks) | Bloom = probabilistic (false positives) |
| Sorted iteration | Balanced BST / Heap (partial) | Random-only access | Heaps only guarantee top element |
| Min/Max streaming | Heap / Monotonic Queue | Need arbitrary removal | Monotonic queue for sliding windows |
| FIFO multi-producer | Queue / Ring Buffer | Random deletions required | Ring buffer for fixed capacity |
| LRU cache | Hash Map + Doubly List | Massive key space w/ low reuse | O(1) get + put eviction |
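The hash map + doubly linked list combination from the LRU row can be sketched with Python's `OrderedDict`, which provides exactly that pairing internally (a minimal sketch, not a production cache):

```python
from collections import OrderedDict

class LRUCache:
    """Hash map + ordered (doubly linked) structure: O(1) get/put with eviction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

Note how a huge key space with low reuse (the "avoid when" column) would make the eviction machinery pure overhead.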
Understand shape, operations, and cost model.
Array: Contiguous memory, O(1) index, expensive middle inserts (O(n)). Great for iteration & cache locality.
Linked List: Node chain with pointers. O(1) head insert; O(n) index access; pointer overhead, poor cache locality.
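The O(1) head insert vs. O(n) traversal trade-off is visible in a minimal node sketch (illustrative names):

```python
class Node:
    __slots__ = ("value", "next")  # every node pays per-pointer overhead

    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def push_front(head, value):
    """O(1): the new node simply points at the old head."""
    return Node(value, head)

def to_list(head):
    """O(n): no random access, only pointer chasing."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out
```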
Stack (LIFO): push/pop O(1). Backtracking, parsing, DFS recursion emulation. Array- or linked-list-backed.
Queue (FIFO): enqueue/dequeue O(1). Scheduling, BFS. Use a circular buffer to avoid shifts.
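The shift-avoiding circular buffer works by letting head and tail indices wrap around a fixed array; a minimal fixed-capacity sketch:

```python
class RingBuffer:
    """Fixed-capacity FIFO: wrapping indices mean no element shifting."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0
        self.size = 0

    def enqueue(self, item):
        if self.size == len(self.buf):
            raise OverflowError("buffer full")
        self.buf[(self.head + self.size) % len(self.buf)] = item
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("buffer empty")
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)  # wrap instead of shift
        self.size -= 1
        return item
```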
Hash Table: Average O(1) put/get; degrades to O(n) worst case if collisions are unmitigated. Good hashing & resizing are crucial.
Binary Search Tree: Ordered structure. Average O(log n) ops; unbalanced degrades to O(n). Self-balancing variants (AVL, Red-Black) maintain log height.
Heap: Partial order: root is min/max. Insert & extract O(log n). Useful for priority scheduling & streaming kth element.
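Priority scheduling with O(log n) insert/extract falls straight out of the standard library's binary heap; a small sketch using made-up task names:

```python
import heapq

# (priority, name) tuples; lower number = more urgent (illustrative data)
tasks = [(3, "send email"), (1, "page on-call"), (2, "rebuild index")]

heap = []
for task in tasks:
    heapq.heappush(heap, task)        # O(log n) insert

# Repeated extract-min drains tasks in priority order, O(log n) each
order = [heapq.heappop(heap)[1] for _ in range(len(tasks))]
```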
Graph: Entities (nodes) + relations (edges). Choose adjacency list vs. matrix based on density.
Trie: String-keyed; O(L) lookup where L is key length. Memory heavy; compress with a radix tree / DAWG.
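The O(L) cost comes from walking one node per character; a minimal uncompressed trie sketch (the radix/DAWG compression mentioned above would merge single-child chains):

```python
class Trie:
    """Each node maps one character to a child; lookup cost is O(L), L = key length."""
    def __init__(self):
        self.children = {}
        self.terminal = False  # marks the end of a stored word

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())  # one node per character
        node.terminal = True

    def contains(self, word):
        node = self
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.terminal
```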
Patterns > isolated tricks. Generalize transformations.
```
// Merge sort:     T(n) = 2T(n/2) + O(n) => O(n log n)
// Binary search:  T(n) = T(n/2)  + O(1) => O(log n)
// Tower of Hanoi: T(n) = 2T(n-1) + 1    => O(2^n)
```

Master Theorem forms aid direct classification.
Experiment with parameters to build intuition.
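One way to experiment: count the comparisons a binary search actually makes and watch the T(n) = T(n/2) + O(1) recurrence appear as "doubling n adds one step" (a minimal sketch):

```python
def binary_search_steps(n, target):
    """Count loop iterations needed to locate target in sorted range(n)."""
    lo, hi, steps = 0, n - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if mid == target:
            return steps
        if mid < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

# Each doubling of n costs roughly one extra halving step: O(log n)
for n in (1024, 2048, 4096):
    print(n, binary_search_steps(n, 0))
```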
Big-O (average unless noted).
| Structure | Access | Search | Insert | Delete |
|---|---|---|---|---|
| Array | O(1) | O(n) | O(n) | O(n) |
| Dynamic Array (amort.) | O(1) | O(n) | O(1) | O(n) |
| Singly Linked List | O(n) | O(n) | O(1)* | O(1)* |
| Hash Table | - | O(1) | O(1) | O(1) |
| BST (balanced) | O(log n) | O(log n) | O(log n) | O(log n) |
| Heap | O(n) (peek min/max O(1)) | O(n) | O(log n) | O(log n) |
| Trie | O(L) | O(L) | O(L) | O(L) |
\*O(1) only given a reference to the node (e.g., head insert); locating an arbitrary position first costs O(n).
Track your progress deliberately.
Why quicksort often wins in practice: in-place (better cache locality), lower constant factors, no extra O(n) merge buffer. Pivot quality drives performance.
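An in-place sketch using the Lomuto partition scheme, with a randomized pivot as one guard against the bad-pivot degradation noted above (illustrative, not tuned):

```python
import random

def quicksort(a, lo=0, hi=None):
    """In-place quicksort (Lomuto partition); random pivot hedges against bad inputs."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    p = random.randint(lo, hi)          # pivot quality drives performance
    a[p], a[hi] = a[hi], a[p]           # move pivot to the end
    i = lo
    for j in range(lo, hi):
        if a[j] <= a[hi]:               # partition around the pivot, in place
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]           # pivot lands at its final index
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
```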
Backtracking is structured recursion in which you explore a decision space and undo (backtrack) partial choices to search alternatives systematically.
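The choose/recurse/undo shape shows up even in the smallest example, enumerating all subsets via one binary decision per item (a minimal sketch):

```python
def subsets(items):
    """Backtracking: choose, recurse, then undo, one include/exclude decision per item."""
    result, current = [], []

    def explore(i):
        if i == len(items):
            result.append(current.copy())  # one complete assignment of choices
            return
        explore(i + 1)                     # decision: exclude items[i]
        current.append(items[i])           # decision: include items[i]
        explore(i + 1)
        current.pop()                      # backtrack: undo the partial choice

    explore(0)
    return result
```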
When you only need the top k elements or a streaming running min/max. A size-k heap reduces the cost from sorting all n elements (O(n log n)) to O(n log k).
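The size-k trick: keep a min-heap of the k best seen so far, so each stream element costs at most one O(log k) replacement (a minimal sketch):

```python
import heapq

def top_k(stream, k):
    """Maintain a size-k min-heap: total cost O(n log k) instead of O(n log n)."""
    heap = []
    for x in stream:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:                # x beats the smallest of the current top k
            heapq.heapreplace(heap, x)   # pop-and-push in one O(log k) step
    return sorted(heap, reverse=True)
```

The stdlib's `heapq.nlargest(k, stream)` implements the same idea directly.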
DP often refers to bottom-up table filling; memoization is top-down caching. Both exploit overlapping subproblems; the choice is influenced by the natural direction of the recurrence and whether the full table is needed.